Defeating the RAID5 write hole with ZFS (but not RAID-Z)

Posted by Michael Shick on Server Fault on 2012-11-13
I'm setting up a long-term storage system for keeping personal backups and archives. I plan to use RAID5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road once the array gets large.

Linux md is a perfect fit for this use case: it supports both of those changes on a live array, performance isn't at all important to me, and the cost is low.
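
For concreteness, here's roughly the mdadm workflow I have in mind (device names and the backup file path are just placeholders):

    # Create a 3-disk RAID5 array
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

    # Later: add a fourth disk and reshape the live array to use it
    mdadm --add /dev/md0 /dev/sde
    mdadm --grow /dev/md0 --raid-devices=4

    # Down the road: convert to RAID6, with one more disk for the second parity
    mdadm --add /dev/md0 /dev/sdf
    mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-reshape.backup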

Now, I also want to defend against silent file corruption, so RAID-Z1 looked like a good fit, but evidently a raidz pool can only be grown by adding whole RAID-Z1 vdevs at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to give up another device for parity with every expansion.
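
As I understand it, the limitation looks like this (commands illustrative only):

    # Create a pool with a 3-disk raidz1 vdev
    zpool create tank raidz1 sdb sdc sdd

    # Growing the pool means adding a whole new raidz1 vdev (3 more disks,
    # one of them burned on parity)...
    zpool add tank raidz1 sde sdf sdg

    # ...because attaching a single disk to widen an existing raidz vdev
    # simply isn't supported.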

So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question:

Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole?
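
My working assumption (happy to be corrected) is that ZFS checksums will at least detect such corruption on a scrub, and that setting copies=2 would give ZFS a second block to repair from even though the pool has only one "device". Something like:

    # Pool backed by the md array; ZFS sees a single device and keeps
    # checksums of every block
    zpool create tank /dev/md0

    # Store two copies of each data block so ZFS can self-heal on a
    # checksum mismatch (only affects data written after it's set)
    zfs set copies=2 tank

    # Periodically verify everything against its checksums
    zpool scrub tank
    zpool status -v tank

Of course, copies=2 halves usable capacity on top of what RAID5 parity already costs, so I'm not sure it's worth it.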

Additionally, any other caveats or advice for such a setup are welcome.

I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so I'm limited to whatever version of ZFS is available for Linux (via ZFS-FUSE or the like).
